This post explains how this blog is generated using hugo, asciidoctor and the papermod theme; how I publish it using nginx; how I've integrated the remark42 comment system; and how I've automated its publication using gitea and json2file-go.
It is a long post, but I hope that at least parts of it can be interesting for some; feel free to ignore it if that is not your case.

The config.yml file is the one shown below (probably some of the settings are not required nor being used right now, but I'm including the current file, so this post will always have the latest version of it):
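As a quick reference, the settings discussed in the notes below map to something like this (a sketch assuming Hugo and PaperMod conventions, not the author's actual file):

```yaml
# Sketch only: reconstructed from the notes that follow, details assumed.
params:
  disableHLJS: true        # avoid hljs styles colliding with rouge
  assets:
    disableHLJS: true
  ShowToc: true
  TocOpen: false           # ToC starts collapsed
  profileMode:
    enabled: false
markup:
  asciidocExt:
    backend: html5s
    extensions:
      - asciidoctor-html5s
      - asciidoctor-diagram
    workingFolderCurrent: true
```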
Some notes about the settings:

- disableHLJS and assets.disableHLJS are set to true; we plan to use rouge on adoc files and the inclusion of the hljs assets adds styles that collide with the ones used by rouge.
- ShowToc is set to true and the TocOpen setting is set to false to make the ToC appear collapsed initially. My plan was to use the asciidoctor ToC, but after trying I believe that the theme's one looks nice and I don't need to adjust styles, although it has some issues with the html5s processor (the admonition titles use <h6> and they are shown on the ToC, which is weird); to fix it I've copied the layouts/partial/toc.html to my site repository and replaced the range of headings to end at 5 instead of 6 (in fact 5 still seems a lot, but as I don't think I'll use that heading level on the posts it doesn't really matter).
- The params.profileMode values are adjusted, but for now I've left it disabled setting params.profileMode.enabled to false, and I've set the homeInfoParams to show more or less the same content with the latest posts under it (I've added some styles to my custom.css style sheet to center the text and image of the first post to match the look and feel of the profile).
- On the asciidocExt section I've adjusted the backend to use html5s, I've added the asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and adjusted the workingFolderCurrent to true to make asciidoctor-diagram work right (haven't tested it yet).

To style the asciidoctor output using the html5s processor I've added some files to the assets/css/extended directory:

- The assets/css/extended/custom.css to make the homeInfoParams look like the profile page; I've also changed a little bit some theme styles to make things look better with the html5s output.
- The assets/css/extended/adoc.css with some styles taken from the asciidoctor-default.css (see this blog post about the original file); mine is the same after formatting it with css-beautify and editing it to use variables for the colors to support light and dark themes.
- A theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions.
- The admonition icons use font-awesome, so I've downloaded its resources for version 4.7.0 (the one used by asciidoctor), storing the font-awesome.css into the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (will be served directly):
dir (will be served directly):FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
curl "$FA_BASE_URL/css/font-awesome.css" \
> assets/css/extended/font-awesome.css
for f in FontAwesome.otf fontawesome-webfont.eot \
fontawesome-webfont.svg fontawesome-webfont.ttf \
fontawesome-webfont.woff fontawesome-webfont.woff2; do
curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
done
As the theme does not include a css compatible with rouge, we need a css file to do the highlight styling; as rouge provides a way to export its themes, I've created the assets/css/extended/rouge.css file with the thankful_eyes theme:

rougify style thankful_eyes > assets/css/extended/rouge.css
To support the html5s backend with admonitions I've added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:

and enabled its minified use on the layouts/partials/extend_footer.html file adding the following lines to it:

{{- $admonitions := slice (resources.Get "js/adoc-admonitions.js")
  | resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
<script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
  integrity="{{ $admonitions.Data.Integrity }}"></script>
To add the comments I've created the layouts/partials/comments.html with the following content based on the remark42 documentation, including extra code to sync the dark/light setting with the one set on the site:

In development I use it with anonymous comments enabled, but to avoid SPAM the production site uses social logins (for now I've only enabled Github & Google; if someone requests additional services I'll check them, but those were the easy ones for me initially).
To support theme switching with remark42 I've also added the following inside the layouts/partials/extend_footer.html file:

{{- if (not site.Params.disableThemeToggle) }}
<script>
  /* Function to change theme when the toggle button is pressed */
  document.getElementById("theme-toggle").addEventListener("click", () => {
    if (typeof window.REMARK42 != "undefined") {
      if (document.body.className.includes('dark')) {
        window.REMARK42.changeTheme('light');
      } else {
        window.REMARK42.changeTheme('dark');
      }
    }
  });
</script>
{{- end }}
With that code, when the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that's needed here only; on page loads the remark42 theme is synced with the main one using the code from the layouts/partials/comments.html shown earlier).

To build and test the site locally I use docker-compose with the following configuration:
To run it properly we have to create the .env file with the current user ID and GID on the variables APP_UID and APP_GID (if we don't do it the files can end up being owned by a user that is not the same as the one running the services):

$ printf "APP_UID=%s\nAPP_GID=%s\n" "$(id -u)" "$(id -g)" > .env
The Dockerfile used to generate the sto/hugo-adoc image is:

If you review it you will see that I'm using the docker-asciidoctor image as the base; the idea is that this image has all I need to work with asciidoctor, and to use hugo I only need to download the binary from their latest release at github (as we are using an image based on alpine we also need to install the libc6-compat package, but once that is done things are working fine for me so far).

The image does not launch the server by default because I don't want it to; in fact I use the same docker-compose.yml file to publish the site in production, simply calling the container without the arguments passed on the docker-compose.yml file (see later).
When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch a nginx container and the remark42 service so we can test everything together.

The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:

The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):
The environment file used with remark42
for development is quite minimal:
And the nginx/default.conf
file used to publish the service locally is simple
too:
To deploy the site I've installed the following packages from the main Debian repository:

- git to clone & pull the repository,
- jq to parse json files from shell scripts,
- json2file-go to save the webhook messages to files,
- inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
- nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
- task-spool to queue the scripts that update the deployment.

And docker and docker compose from the debian packages on the docker repository:

- docker-ce to run the containers,
- docker-compose-plugin to run docker compose (it is a plugin, so no - in the name).
the name).git
repository I ve created a deploy key, added it to gitea
and cloned the project on the /srv/blogops
PATH (that route is owned by a
regular user that has permissions to run docker
, as I said before).hugo
To compile the site we are using the docker-compose.yml
file seen before, to
be able to run it first we build the container images and once we have them we
launch hugo
using docker compose run
:
$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --

That command generates the site on the /srv/blogops/public directory (we remove the directory first because hugo does not clean the destination folder as jekyll does).
The deploy script re-generates the site as described and moves the public directory to its final place for publishing.

To run remark42 with docker, on the /srv/blogops/remark42 folder I have the following docker-compose.yml:
The ../.env file is loaded to get the APP_UID and APP_GID variables that are used by my version of the init.sh script to adjust file permissions, and the env.prod file contains the rest of the settings for remark42, including the social network tokens (see the remark42 documentation for the available parameters; I don't include my configuration here because some of them are secrets).

The nginx configuration for the blogops.mixinet.net site is as simple as:
server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80;
  listen [::]:80;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}
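The included remark42.conf is not reproduced in the post; based on the nginx example from the remark42 documentation, it presumably looks something like this (the location and backend port are my assumptions):

```nginx
# Hypothetical sketch of /srv/blogops/nginx/remark42.conf
location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```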
The site is published from /srv/blogops/nginx/public_html and not from /srv/blogops/public; the reason for that is that I want to be able to compile without affecting the running site: the deployment script generates the site on /srv/blogops/public and if all works well we rename folders to do the switch, making the change feel almost atomic.
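The folder switch can be sketched with a couple of renames (directory names from the post; a temporary directory is used here so the sketch is safe to run):

```shell
# Sketch of the publish switch: build into a fresh tree, then swap it into
# place with two quick renames so the running site is never half-updated.
base=$(mktemp -d)                      # stand-in for /srv/blogops
mkdir -p "$base/public" "$base/nginx/public_html"
echo "new version" > "$base/public/index.html"
mv "$base/nginx/public_html" "$base/nginx/public_html.old"
mv "$base/public" "$base/nginx/public_html"
rm -rf "$base/nginx/public_html.old"
cat "$base/nginx/public_html/index.html"   # prints: new version
rm -rf "$base"
```

Since both paths are on the same filesystem each mv is a rename() call, which is why the switch feels almost atomic.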
Since there is a VPN between the gitea server at my home and the VM where the blog is served, I'm going to configure json2file-go to listen for connections on a high port using a self-signed certificate, listening on IP addresses only reachable through the VPN.
To do it we create a systemd socket
to run json2file-go
and adjust its
configuration to listen on a private IP (we use the FreeBind
option on its
definition to be able to launch the service even when the IP is not available,
that is, when the VPN is down).
The following script can be used to set up the json2file-go
configuration:
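The setup script is not shown here; the socket unit it creates presumably looks something like this (the IP, port and FreeBind option come from the text, the rest is assumed):

```ini
# /etc/systemd/system/json2file-go.socket (sketch)
[Unit]
Description=json2file-go socket

[Socket]
# Private VPN address; FreeBind lets systemd bind it even when the VPN is down.
ListenStream=172.31.31.1:4443
FreeBind=true

[Install]
WantedBy=sockets.target
```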
The script uses mkcert to create the temporary certificates; to install the package on bullseye the backports repository must be available.

To configure the gitea webhook against the json2file-go server we go to the project and enter into the hooks/gitea/new page; once there we create a new webhook of type gitea, set the target URL to https://172.31.31.1:4443/blogops and on the secret field we put the token generated with uuid by the setup script:
sed -n -e 's/blogops://p' /etc/json2file-go/dirlist

To make the call work we also have to make sure that the webhook section of the app.ini of our gitea server allows us to call the IP and skips the TLS verification (you can see the available options on the gitea documentation).
The [webhook]
section of my server looks like this:
[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true
Once we have the webhook configured we can try it, and if it works our json2file server will store the file on the /srv/blogops/webhook/json2file/blogops/ folder.

At this point we can receive the webhook calls from gitea and store the messages on files, but we have to do something to process those files once they are saved in our machine.
An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify: we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_CLOSE_WRITE and IN_MOVED_TO events).
To avoid concurrency problems we are going to use task-spooler
to launch the
scripts that process the webhooks using a queue of length 1, so they are
executed one by one in a FIFO queue.
The spooler script is this:
To run it as a daemon we install it as a systemd service
using the following
script:
And that's all for the blog setup: the site is compiled with hugo using docker compose and published automatically from the gitea webhooks.

Anybody who can use apt on Debian or Ubuntu can use it, and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips, sorry Raspberry Pi or M1 laptop). It covers all of CRAN (or nearly 19k packages), all the BioConductor packages depended-upon (currently over 200), and only excludes less than a handful of CRAN packages that cannot be built.
bspm

The r2u setup can be used directly with apt (or dpkg or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-cran-rcpp and r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works, but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R's own install.packages() and update.packages() to apt. So we can just say (as the demos above show) install.packages("tidyverse") or install.packages("brms") and binaries are installed via apt, which is fantastic: it connects R to the system package manager. The setup is really only two lines and described at the r2u site as part of the setup.
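If I remember the r2u documentation correctly, those two lines go into Rprofile.site; check the r2u setup page for the authoritative version:

```r
# Sketch from memory of the r2u docs; verify against the official setup page.
suppressMessages(bspm::enable())      # route install.packages() through apt
options(bspm.version.check = FALSE)   # don't compare against CRAN versions
```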
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Needs to be able to create two copies always. Can get stuck in irreversible read-only mode if only one copy can be made.

Even as of now, RAID-1 and RAID-10 have this note:

The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so the reads will spread over the mirrors evenly or based on device congestion.

Granted, that's not a stability concern anymore, just performance. A reviewer of a draft of this article actually claimed that BTRFS only reads from one of the drives, which hopefully is inaccurate, but goes to show how confusing all this is.

There are other warnings in the Debian wiki that are quite scary. Even the legendary Arch wiki has a warning on top of their BTRFS page, still. Even if those issues are now fixed, it can be hard to tell when they were fixed. There is a changelog by feature but it explicitly warns that it doesn't know "which kernel version it is considered mature enough for production use", so it's also useless for this.

It would have been much better if BTRFS was released into the world only when those bugs were completely fixed. Or that, at least, features were announced when they were stable, not just "we merged to mainline, good luck". Even now, we get mixed messages even in the official BTRFS documentation which says "The Btrfs code base is stable" (main page) while at the same time clearly stating unstable parts in the status page (currently RAID56).

There are much harsher BTRFS critics than me out there so I will stop here, but let's just say that I feel a little uncomfortable trusting server data with full RAID arrays to BTRFS. But surely, for a workstation, things should just work smoothly... Right? Well, let's see the snags I hit.
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0 931,5G  0 disk
├─sda1              8:1    0   200M  0 part  /boot/efi
├─sda2              8:2    0     1G  0 part  /boot
├─sda3              8:3    0   7,8G  0 part
│ └─fedora_swap   253:5    0   7.8G  0 crypt [SWAP]
└─sda4              8:4    0 922,5G  0 part
  └─fedora_crypt  253:4    0 922,5G  0 crypt /
(This might not entirely be accurate: I rebuilt this from the Debian
side of things.)
This is pretty straightforward, except for the swap partition: normally, I just treat swap like any other logical volume and create it in a logical volume. This is now just speculation, but I bet it was set up this way because "swap" support was only added in BTRFS 5.0.
I fully expect BTRFS experts to yell at me now because this is an old
setup and BTRFS is so much better now, but that's exactly the point
here. That setup is not that old (2018? old? really?), and migrating
to a new partition scheme isn't exactly practical right now. But let's
move on to more practical considerations.
For comparison, on my LVM-based server the stack looks like this:

- two NVMe disks, /dev/nvme0n1 and nvme1n1
- software RAID-1 arrays on their partitions, like /dev/md1
- a LUKS crypt device on top of the main array
- an LVM volume group, vg_tbbuild05 (multiple PVs can be added to a single VG which is why there is that abstraction)
- logical volumes for /, swap and /srv

NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                   259:0    0   1.7T  0 disk
├─nvme0n1p1               259:1    0     8M  0 part
├─nvme0n1p2               259:2    0   512M  0 part
│ └─md0                     9:0    0   511M  0 raid1 /boot
├─nvme0n1p3               259:3    0   1.7T  0 part
│ └─md1                     9:1    0   1.7T  0 raid1
│   └─crypt_dev_md1       253:0    0   1.7T  0 crypt
│     ├─vg_tbbuild05-root 253:1    0    30G  0 lvm   /
│     ├─vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
│     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
└─nvme0n1p4               259:4    0     1M  0 part
I stripped the other nvme1n1 disk because it's basically the same.

Now, if we look at my BTRFS-enabled workstation, which doesn't even have RAID, we have the following:

- a single disk, /dev/sda, with, again, /dev/sda4 being where BTRFS lives
- a crypt device, fedora_crypt, which is, confusingly, kind of like a volume group: it's where everything lives. I think.
- "subvolumes" like home, root, /, etc. Those are actually the things that get mounted. You'd think you'd mount a filesystem, but no, you mount a subvolume. That is backwards.

Here is what that looks like in lsblk:
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                8:0    0 931,5G  0 disk
├─sda1             8:1    0   200M  0 part  /boot/efi
├─sda2             8:2    0     1G  0 part  /boot
├─sda3             8:3    0   7,8G  0 part  [SWAP]
└─sda4             8:4    0 922,5G  0 part
  └─fedora_crypt 253:4    0 922,5G  0 crypt /srv
Notice how we don't see all the BTRFS volumes here? Maybe it's because
I'm mounting this from the Debian side, but lsblk
definitely gets
confused here. I frankly don't quite understand what's going on, even
after repeatedly looking around the rather dismal
documentation. But that's what I gather from the following
commands:
root@curie:/home/anarcat# btrfs filesystem show
Label: 'fedora' uuid: 5abb9def-c725-44ef-a45e-d72657803f37
Total devices 1 FS bytes used 883.29GiB
devid 1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt
root@curie:/home/anarcat# btrfs subvolume list /srv
ID 257 gen 108092 top level 5 path home
ID 258 gen 108094 top level 5 path root
ID 263 gen 108020 top level 258 path root/var/lib/machines
I only got to that point through trial and error. Notice how I use an
existing mountpoint to list the related subvolumes. If I try to use
the filesystem path, the one that's listed in filesystem show
, I
fail:
root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt
ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
ERROR: can't access '/dev/mapper/fedora_crypt'
Maybe I just need to use the label? Nope:
root@curie:/home/anarcat# btrfs subvolume list fedora
ERROR: cannot access 'fedora': No such file or directory
ERROR: can't access 'fedora'
This is really confusing. I don't even know if I understand this
right, and I've been staring at this all afternoon. Hopefully, the
lazyweb will correct me eventually.
(As an aside, why are they called "subvolumes"? If something is a "sub" of "something else", that "something else" must exist, right? But no, BTRFS doesn't have "volumes", it only has "subvolumes". Go figure. Presumably the filesystem still holds "files", though; at least empirically it doesn't seem like it lost anything so far.)
In any case, at least I can refer to this section in the future, the
next time I fumble around the btrfs
commandline, as I surely will. I
will possibly even update this section as I get better at it, or based
on my reader's judicious feedback.
Here is how the filesystem gets mounted in /etc/fstab, on the Debian side of things:

UUID=5abb9def-c725-44ef-a45e-d72657803f37 /srv btrfs defaults 0 2

This thankfully ignores all the subvolume nonsense because it relies on the UUID. mount tells me that's actually the "root" (? /?) subvolume:

root@curie:/home/anarcat# mount | grep /srv
/dev/mapper/fedora_crypt on /srv type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
Let's see if I can mount the other volumes I have on there. Remember
that subvolume list
showed I had home
, root
, and
var/lib/machines
. Let's try root
:
mount -o subvol=root /dev/mapper/fedora_crypt /mnt
Interestingly, root
is not the same as /
, it's a different
subvolume! It seems to be the Fedora root (/
, really) filesystem. No
idea what is happening here. I also have a home
subvolume, let's
mount it too, for good measure:
mount -o subvol=home /dev/mapper/fedora_crypt /mnt/home
Note that lsblk
doesn't notice those two new mountpoints, and that's
normal: it only lists block devices and subvolumes (rather
inconveniently, I'd say) do not show up as devices:
root@curie:/home/anarcat# lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                8:0    0 931,5G  0 disk
├─sda1             8:1    0   200M  0 part
├─sda2             8:2    0     1G  0 part
├─sda3             8:3    0   7,8G  0 part
└─sda4             8:4    0 922,5G  0 part
  └─fedora_crypt 253:4    0 922,5G  0 crypt /srv
This is really, really confusing. Maybe I did something wrong in the
setup. Maybe it's because I'm mounting it from outside Fedora. Either
way, it just doesn't feel right.
root@curie:/home/anarcat# df -h /srv /mnt /mnt/home
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_crypt 923G 886G 31G 97% /srv
/dev/mapper/fedora_crypt 923G 886G 31G 97% /mnt
/dev/mapper/fedora_crypt 923G 886G 31G 97% /mnt/home
(Notice, in passing, that it looks like the same filesystem is mounted
in different places. In that sense, you'd expect /srv
and /mnt
(and /mnt/home
?!) to be exactly the same, but no: they are entirely
different directory structures, which I will not call "filesystems"
here because everyone's head will explode in sparks of confusion.)
Yes, disk space is shared (that's the Size
and Avail
columns,
makes sense). But nope, no cookie for you: they all have the same
Used
columns, so you need to actually walk the entire filesystem to
figure out what each disk takes.
(For future reference, that's basically:
root@curie:/home/anarcat# time du -schx /mnt/home /mnt /srv
124M /mnt/home
7.5G /mnt
875G /srv
883G total
real 2m49.080s
user 0m3.664s
sys 0m19.013s
And yes, that was painfully slow.)
ZFS actually has some oddities in that regard, but at least it tells
me how much disk each volume (and snapshot) takes:
root@tubman:~# time df -t zfs -h
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/debian 3.5T 1.4G 3.5T 1% /
rpool/var/tmp 3.5T 384K 3.5T 1% /var/tmp
rpool/var/spool 3.5T 256K 3.5T 1% /var/spool
rpool/var/log 3.5T 2.0G 3.5T 1% /var/log
rpool/home/root 3.5T 2.2G 3.5T 1% /root
rpool/home 3.5T 256K 3.5T 1% /home
rpool/srv 3.5T 80G 3.5T 3% /srv
rpool/var/cache 3.5T 114M 3.5T 1% /var/cache
bpool/BOOT/debian 571M 90M 481M 16% /boot
real 0m0.003s
user 0m0.002s
sys 0m0.000s
That's 56360 times faster, by the way.
But yes, that's not fair: those in the know will know there's a
different command to do what df
does with BTRFS filesystems, the
btrfs filesystem usage
command:
root@curie:/home/anarcat# time btrfs filesystem usage /srv
Overall:
Device size: 922.47GiB
Device allocated: 916.47GiB
Device unallocated: 6.00GiB
Device missing: 0.00B
Used: 884.97GiB
Free (estimated): 30.84GiB (min: 27.84GiB)
Free (statfs, df): 30.84GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
/dev/mapper/fedora_crypt 906.45GiB
Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
/dev/mapper/fedora_crypt 10.00GiB
System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
/dev/mapper/fedora_crypt 16.00MiB
Unallocated:
/dev/mapper/fedora_crypt 6.00GiB
real 0m0,004s
user 0m0,000s
sys 0m0,004s
Almost as fast as ZFS's df! Good job. But wait: that doesn't actually tell me usage per subvolume. Notice it's filesystem usage, not subvolume usage, which unhelpfully refuses to exist. That command only shows that one "filesystem"'s internal statistics, which are pretty opaque. You can also appreciate that it's wasting 6GB of "unallocated" disk space there: I probably did something Very Wrong and should be punished by Hacker News. I also wonder why it has 1.68GB of "metadata" used...
At this point, I just really want to throw that thing out of the
window and restart from scratch. I don't really feel like learning the
BTRFS internals, as they seem oblique and completely bizarre to me. It
feels a little like the state of PHP now: it's actually pretty solid,
but built upon so many layers of cruft that I still feel it corrupts
my brain every time I have to deal with it (needle or haystack first?
anyone?)...
An if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the "branch"). All these principles apply to static calls as well, but they're for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.
network RNG improvements

It is now possible to limit which groups a process can switch to with CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn't complete yet, though, since handling setgroups() is still needed.)
improve kernel's internal checking of file contents

removal of set_fs()

Christoph Hellwig made it possible for set_fs() to be optional for an architecture. Subsequently, he then removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of kernel address limit attacks that only needed to corrupt a single value in struct thread_info.
sysfs_emit() replaces sprintf() in /sys

The long-standing use of sprintf() and snprintf() in /sys handlers was tackled by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers.
nosymfollow mount option

The kernel gained a nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is useful "as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts" (i.e. for when /proc/sys/fs/protected_symlinks isn't a big enough hammer). Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector.
ARMv8.5 Memory Tagging Extension support

Work also continued on enabling the -Warray-bounds compiler flag, to clear the path for saner bounds checking of array indexes and memcpy() usage.
That s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.
2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
Procmail has been abandoned upstream for two decades, according to its debian/changelog at least:

procmail (3.22-1) unstable; urgency=low

  * New upstream release, which uses the 'standard' format for Maildir
    filenames and retries on name collision. It also contains some
    bug fixes from the 3.23pre snapshot dated 2001-09-13.
  * Removed 'sendmail' from the Recommends field, since we already
    have 'exim' (the default Debian MTA) and 'mail-transport-agent'.
  * Removed suidmanager support. Conflicts: suidmanager (<< 0.50).
  * Added support for DEB_BUILD_OPTIONS in the source package.
  * README.Maildir: Do not use locking on the example recipe,
    since it's wrong to do so in this case.

 -- Santiago Vila <sanvila@debian.org>  Wed, 21 Nov 2001 09:40:20 +0100
All Debian suites from buster onwards ship the 3.22-26 release,
although the maintainer just pushed a 3.22-27 release to fix a seven
year old null pointer dereference, after this article was drafted.
Procmail is also shipped in all major distributions: Fedora
and its derivatives, Debian derivatives, Gentoo, Arch,
FreeBSD, OpenBSD. We all seem to be ignoring this problem.
The upstream website (http://procmail.org/) has been down since
about 2015, according to Debian bug #805864, with no change
since.
In effect, every distribution is currently maintaining its fork of
this dead program.
Note that, after filing a bug to keep Debian from shipping procmail in a stable release again, I was told that the Debian maintainer is apparently in contact with the upstream. And, surprise! They still plan to release that fabled 3.23 release, which has now been in "pre-release" for all those twenty years.
In fact, it turns out that 3.23 is considered released already, and that the procmail author actually pushed a 3.24 release, codenamed "Two decades of fixes". That amounts to 25 commits since 3.23pre, some of which address serious security issues, but none of which address fundamental issues with the code base.
The procmail binary ships with special permissions, owned by root:mail in Debian. There's no debconf or pre-seed setting that can change this. There have been two bug reports against the Debian package to make this configurable (298058, 264011), but both were closed to say that, basically, you should use dpkg-statoverride to change the permissions on the binary.
So if anything, you should immediately run this command on any host
that you have procmail
installed on:
dpkg-statoverride --update --add root root 0755 /usr/bin/procmail
Note that this might break email delivery. It might also not work at
all, thanks to usrmerge. Not sure. Yes, everything is on
fire. This is fine.
In my opinion, even assuming we keep procmail in Debian, that default
should be reversed. It should be up to people installing procmail to
assign it those dangerous permissions, after careful consideration of
the risk involved.
The last maintainer of procmail explicitly advised us (in that null
pointer dereference bug) and other projects (e.g. OpenBSD, in [2])
to stop shipping it, back in 2014. Quote:
Executive summary: delete the procmail port; the code is not safe and should not be used as a basis for any further work.

I just read some of the code again this morning, after the original author claimed that procmail was active again. It's still littered with bizarre macros like:
#define bit_set(name,which,value) \
(value?(name[bit_index(which)] =bit_mask(which)):\
(name[bit_index(which)]&=~bit_mask(which)))
... from regexp.c, line 66 (yes, that's a custom regex
engine). Or this one:
#define jj (aleps.au.sopc)
It uses insecure functions like strcpy extensively. malloc() is thrown around gotos like it's 1984 all over again. (To be fair, it has been feeling like 1984 a lot lately, but that's another matter entirely.)
That null pointer deref bug? It's fixed upstream now, in this
commit merged a few hours ago, which I presume might be in response
to my request to remove procmail from Debian.
So while that's nice, this is just the tip of the iceberg. I speculate that one could easily find an exploitable crash in procmail if only by running it through a fuzzer. But I don't need to speculate: procmail had, for years, serious security issues that could possibly lead to root privilege escalation, remotely exploitable if procmail is (as it's designed to do) exposed to the network.
Maybe I'm overreacting. Maybe the procmail author will go through the
code base and do a proper rewrite. But I don't think that's what is in
the cards right now. What I expect will happen next is that people
will start fuzzing procmail, throw an uncountable number of bug
reports at it which will get fixed in a trickle while never fixing the
underlying, serious design flaws behind procmail.
Alternatives to procmail(1) itself are typically part of mail servers. For example, Dovecot has its own LDA which implements the standard Sieve language (RFC 5228). (Interestingly, Sieve was published as RFC 3028 in 2001, before procmail was formally abandoned.)
Courier also has "maildrop" which has its own filtering mechanism,
and there is fdm (2007) which is a fetchmail and procmail
replacement. Update: there's also mailprocessing, which is not
an LDA, but processes an existing folder. It was, however,
specifically designed to replace complex Procmail rules.
But the procmail package, of course, doesn't just ship procmail; that
would just be too easy. It ships mailstat(1)
which we could probably ignore
because it only parses procmail log files. But more importantly, it
also ships:
lockfile(1) - conditional semaphore-file creator
formail(1) - mail (re)formatter

lockfile(1)
already has a somewhat acceptable replacement in the form of
flock(1)
, part of util-linux (which is Essential, so installed on
any normal Debian system). It might not be a direct drop-in
replacement, but it should be close enough.
formail(1)
is similar: the courier maildrop
package ships
reformail(1)
which is, presumably, a rewrite of formail. It's
unclear if it's a drop-in replacement, but it should probably be
possible to port uses of formail to it easily.
Update: the maildrop package ships a SUID root binary (two, even). So
if you want only reformail(1), you might want to disable that with:

dpkg-statoverride --update --add root root 0755 /usr/bin/lockmail.maildrop
dpkg-statoverride --update --add root root 0755 /usr/bin/maildrop

It would perhaps be better to have reformail(1) as a separate package,
see bug 1006903 for that discussion.

The real challenge is, of course, migrating those old .procmailrc
recipes to Sieve (basically). I added a few examples in the appendix
below. You might notice the Sieve examples are easier to read, which
is a nice added bonus.
procmail
installed everywhere, possibly because userdir-ldap was using
it for lockfile
until 2019. I sent a patch to fix that and scrambled
to get rid of procmail everywhere. That took about a day.
But many other sites are now in that situation, possibly not imagining
they have this glaring security hole in their infrastructure.
~/.dovecot.sieve
.
user+foo@example.com
to the folder
foo
. You might write something like this in procmail:
MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR
LOGFILE=$HOME/.procmail.log
VERBOSE=off
EXTENSION=$1 # Need to rename it - ?? does not like $1 nor 1
:0
* EXTENSION ?? [a-zA-Z0-9]+
.$EXTENSION/
That, in sieve language, would be:
require ["variables", "envelope", "fileinto", "subaddress"];
########################################################################
# wildcard +extension
# https://doc.dovecot.org/configuration_manual/sieve/examples/#plus-addressed-mail-filtering
if envelope :matches :detail "to" "*" {
    # Save name in ${name} in all lowercase
    set :lower "name" "${1}";
    fileinto "${name}";
    stop;
}
Subject:
line having FreshPorts
in it into the freshports
folder, and mails from alternc.org
mailing lists into the alternc
folder:
:0
## mailing list freshports
* ^Subject.*FreshPorts.*
.freshports/
:0
## mailing list alternc
* ^List-Post.*mailto:.*@alternc.org.*
.alternc/
Equivalent Sieve:
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
}
:0
* ^Subject: Cron
* ^From: .*root@
.rapports/
Would look something like this in Sieve:
if header :comparator "i;octet" :contains "Subject" "Cron" {
    if header :regex :comparator "i;octet" "From" ".*root@" {
        fileinto "rapports";
    }
}

Note that this is what the automated converter does (below). It's not
very readable, but it works.
if header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

if exists "List-Id" {
    fileinto "lists";
}

if anyof (header :contains "from" "example.com",
          header :contains ["to", "cc"] "anarcat@example.com") {
    fileinto "example";
}
You can even pile up a bunch of options together to have one big rule
with multiple patterns:
if anyof (exists "X-Cron-Env",
header :contains ["subject"] ["security run output",
"monthly run output",
"daily run output",
"weekly run output",
"Debian Package Updates",
"Debian package update",
"daily mail stats",
"Anacron job",
"nagios",
"changes report",
"run output",
"[Systraq]",
"Undelivered mail",
"Postfix SMTP server: errors from",
"backupninja",
"DenyHosts report",
"Debian security status",
"apt-listchanges"
],
header :contains "Auto-Submitted" "auto-generated",
envelope :contains "from" ["nagios@",
"logcheck@",
                             "root@"]) {
    fileinto "rapports";
}
0.4.7.3-alpha
of Tor can now build reproducible tarballs via the make dist-reprod
command. This issue was tracked via Tor issue #26299.
mkimg
and makefs
commands:
Fabian's original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.

After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
I recently noticed that the "memstick" images are unfortunately
still not 100% reproducible.
195, 196, 197 and 198 to Debian, as well as made the following changes:
.dsc field values. [ ]
/usr/lib/x86_64-linux-gnu to our local binary search path. [ ]
has_same_content_as logging calls. [ ]
token variable with an anonymously-named variable instead to remove extra lines. [ ]
.pyc files. This fixes test failures on big-endian machines. [ ]
binary-with-bad-dynamic-table. [ ]
Enhances field in debian/control. [ ]
GNU_BUILD_ID field has been modified [ ].

Thank you for your contributions!
build_path_identifiers_in_documentation_generated_by_doxygen
non_deterministic_doc_base_file_for_javadoc
nondeterministic_ordering_in_guile_binaries
1.13.0-1
was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism
Debian integration interface uses the new get_non_binnmu_date_epoch()
utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.
gnome-desktop
package reproducible.
g++7/rsync (randomness in output)
python-eventlet (build fails in the future)
python-PyQRCode (incorporates copyright year)

apbs
binutils-riscv64-unknown-elf
gcc-riscv64-unknown-elf
userbindmount
nanomsg
freediameter
gr-satellites
kjs
xeus-python
libime
fcitx5-gtk
fcitx
libpodofo
meshlab
eiskaltdcpp
editorconfig-core
python-parse-type
sphinx-copybutton
fcitx5-qt
libxmlb, fixed by Richard Hughes. Waiting for an upstream release.

/boot partition size. [ ]
apt-daily and apt-daily-upgrade services [ ], failed e2scrub_all.service & user@ systemd units [ ][ ], as well as generic build failures [ ].
arm64 architecture nodes hosted at/by codethink.co.uk. [ ]

rb-general@lists.reproducible-builds.org
#reproducible-builds on irc.oftc.net.
:wq
for today.
vimrc
contained not much more than preferences for indentation and how to
visually indicate white space characters like tabs.
Last but not least, I've used a single colour scheme for most of that time:
Zenburn.
In 2015 I started exploring a few Vim plugins. To manage
them, I started by choosing a plugin manager,
Pathogen. Recently I
noticed that the plugin's author, Tim Pope, now recommends new users just use
Vim's built in package management instead. I got curious about Tim Pope's
other plugins: He has written a great deal of them.
Given I've spent most of two decades with a barely-configured Vim, you can
imagine that I don't want to radically alter the way it works, and so I did
not expect to want to use a lot of plugins. The utility of a plugin would have
to outweigh the disadvantages of coming to rely on one. But when browsing
Tim's plugins, time and time again I found myself reacting to a description
with "How did I manage without this?". And so, I've ended up installing all
of the following, all by Tim Pope:
ga
YYYY-MM-DD) properly with the increment/decrement functions (<C-A>, <C-X>)
:cnext and :cprevious
$ ip link add vevx0a type veth peer name vevx0z
$ ip addr add 169.254.0.2/31 dev vevx0a
$ ip addr add 169.254.0.3/31 dev vevx0z
$ ip link add vxlan0 type vxlan id 42 \
    local 169.254.0.2 dev vevx0a dstport 4789
$ # Note the above 'dev' and 'local' ip are set here
$ ip addr add 10.10.10.1/24 dev vxlan0

results in vxlan0 listening on all interfaces, not just
vevx0z
or vevx0a
. To prove it to myself, I spun up a docker container (using a completely different network bridge with no connection to any of the interfaces above), and ran a Go program to send VXLAN UDP packets to my bridge host:
$ docker run -it --rm -v $(pwd):/mnt debian:unstable /mnt/spam 172.17.0.1:4789
$

which results in packets getting injected into my vxlan interface:
$ sudo tcpdump -e -i vxlan0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on vxlan0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
21:30:15.746754 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746773 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746787 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746801 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746815 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746827 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746870 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746885 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746899 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
21:30:15.746913 de:ad:be:ef:00:01 (oui Unknown) > Broadcast, ethertype IPv4 (0x0800), length 64: truncated-ip - 27706 bytes missing! 33.0.0.0 > localhost: ip-proto-114
10 packets captured
10 packets received by filter
0 packets dropped by kernel

(the program in question is the following:)
package main

import (
	"net"
	"os"

	"github.com/mdlayher/ethernet"
	"github.com/mdlayher/vxlan"
)

func main() {
	conn, err := net.Dial("udp", os.Args[1])
	if err != nil {
		panic(err)
	}
	for i := 0; i < 10; i++ {
		vxf := &vxlan.Frame{
			VNI: vxlan.VNI(42),
			Ethernet: &ethernet.Frame{
				Source:      net.HardwareAddr{0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x01},
				Destination: net.HardwareAddr{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
				EtherType:   ethernet.EtherTypeIPv4,
				Payload:     []byte("Hello, World!"),
			},
		}
		frb, err := vxf.MarshalBinary()
		if err != nil {
			panic(err)
		}
		_, err = conn.Write(frb)
		if err != nil {
			panic(err)
		}
	}
}

When using vxlan, be absolutely sure all hosts that can address any interface on the host are authorized to send arbitrary packets into any VLAN that box can send to, or that there are very careful and specific controls and firewalling in place. Note this includes public interfaces (e.g., dual-homed private network / internet boxes), or any type of dual-homing (VPNs, etc.).
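One way to apply that advice to the example host above is a host firewall rule that only accepts VXLAN traffic from the trusted peer. The following is a sketch in nftables syntax, reusing the peer address 169.254.0.3 from the veth pair above; it is an illustration, not a vetted production ruleset:

```
# nftables fragment: only the veth peer 169.254.0.3 may deliver
# VXLAN (UDP port 4789) packets; drop them from everywhere else.
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        udp dport 4789 ip saddr != 169.254.0.3 drop
    }
}
```

A matching rule is needed for IPv6 sources if the host is reachable over IPv6, and dual-homed hosts should apply it on every interface.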
https://lists.debian.org/debian-vote/2021/11/msg00034.html

Previous drafts and resulting discussion are at:

https://lists.debian.org/debian-vote/2021/10/msg00002.html

and also see the discussion thread starting here:

https://lists.debian.org/debian-vote/2021/09/msg00010.html
https://lists.debian.org/debian-vote/2021/10/msg00019.html
Server refused public-key signature despite accepting key!
I turned up debugging on the server and recorded:
Sep 20 13:10:32 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:32 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:32 container001 sshd[1647842]: Postponed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: userauth-request for user XXXXXXXXX service ssh-connection method publickey [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: attempt 2 failures 0 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: temporarily_use_uid: 1000/1000 (e=0/0)
Sep 20 13:10:33 container001 sshd[1647842]: debug1: trying public key file /home/XXXXXXXXX/.ssh/authorized_keys
Sep 20 13:10:33 container001 sshd[1647842]: debug1: fd 5 clearing O_NONBLOCK
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: matching key found: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Sep 20 13:10:33 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:33 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:33 container001 sshd[1647842]: debug1: auth_activate_options: setting new authentication options
Sep 20 13:10:33 container001 sshd[1647842]: Failed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:39 container001 sshd[1647514]: debug1: Forked child 1648153.
Sep 20 13:10:39 container001 sshd[1648153]: debug1: Set /proc/self/oom_score_adj to 0
Sep 20 13:10:39 container001 sshd[1648153]: debug1: rexec start in 5 out 5 newsock 5 pipe 8 sock 9
Sep 20 13:10:39 container001 sshd[1648153]: debug1: inetd sockets after dupping: 4, 4
The server log seems to agree with the client returned message: first the key
was accepted, then it was refused.
We re-generated a new key. We turned off the Windows firewall. We deleted all
the PuTTY settings via the Windows registry and re-set them from scratch.
Nothing seemed to work. Then another Windows user reported no problem (and
that user was running PuTTY version 0.74). So the first user downgraded to
0.74 and everything worked fine.
ferm
first", "you need an extra reboot here", or "this is how you finish
the PostgreSQL upgrade".
With Debian getting closer to a 2 year release cycle, with the
previous release being supported basically only one year after the
new stable comes out, I feel more and more strongly that this needs
better automation.
So I'm thinking that I should write a prototype for this. Ubuntu has
do-release-upgrade that is too Ubuntu-specific to be reused. An
attempt at collaborating on this has been mostly met with silence
from Ubuntu's side as well.
I'm thinking of using something like Fabric, Mitogen, or
Transilience: anything that will allow me to write simple,
portable Python code that can run transparently on a local machine
(for single-system upgrades, possibly with a GUI frontend) or on
remote servers (for large clusters of servers, maybe with canaries
and grouping using Cumin). I'll note that Koumbit started
experimenting with Puppet Bolt in the bullseye upgrade process,
but that feels too site-specific to be useful more broadly.
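For reference, the mechanical core that any such tool would have to wrap is small. Here is a sketch of the buster-to-bullseye case, operating on a copy of sources.list so nothing on the system is modified; the paths and the commented final commands are illustrative, and the real procedure in the release notes has many manual checks around this:

```shell
#!/bin/sh
# Sketch: the mechanical part of a Debian major upgrade that a
# do-release-upgrade-like tool would wrap. Works on a copy, so
# running it changes nothing on the system.
set -e
src=${1:-}  # optionally pass your real sources.list to convert it
work=/tmp/sources.list.bullseye
if [ -n "$src" ]; then
    cp "$src" "$work"
else
    # sample entry used when no file is given
    printf 'deb http://deb.debian.org/debian buster main\n' > "$work"
fi
sed -i 's/buster/bullseye/g' "$work"
cat "$work"
# After reviewing the result, the release notes' sequence is roughly:
#   install -m 644 "$work" /etc/apt/sources.list
#   apt update && apt upgrade --without-new-pkgs && apt full-upgrade
```

The hard part, of course, is everything this sketch omits: the per-release manual steps, reboots, and service-specific migrations that the release notes document in prose.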
Thanks to lelutin and pabs for reviewing a draft of this post.
error: symbol grub_is_lockdown not found

I looked for a solution and it seemed everyone was stuck or the solution was unclear. There is even a bug report in Debian about this error, bug #984760.
update-grub2

2) reinstall grub-efi-amd64 and make Debian the default

dpkg-reconfigure -plow grub-efi-amd64

When reinstalling grub-efi-amd64 onto the disk, I think the scariest questions were these:

Force extra installation to the EFI removable media path?

Some EFI-based systems are buggy and do not handle new bootloaders correctly. If you force an extra installation of GRUB to the EFI removable media path, this should ensure that this system will boot Debian correctly despite such a problem. However, it may remove the ability to boot any other operating systems that also depend on this path. If so, you will need to make sure that GRUB is configured successfully to be able to boot any other OS installations correctly.

and

Update NVRAM variables to automatically boot into Debian?

GRUB can configure your platform's NVRAM variables so that it boots into Debian automatically when powered on. However, you may prefer to disable this behavior and avoid changes to your boot configuration. For example, if your NVRAM variables have been set up such that your system contacts a PXE server on every boot, this would preserve that behavior.

I think the first can be safely answered "No" if you don't plan on booting via a removable USB stick, while the second is the one that does the restoring. The second question is probably safe if you don't use PXE boot or another boot method; at least that's what I understand. But if you do, I suspect that by installing refind, or by playing with the multiple efi* named packages and tools, you can restore that, or your BIOS might allow it directly.
The 5.10 compatible driver for RTL8821CE wireless adapter

After the upgrade to Buster, with the oldstable version of the kernel, 4.19, the hacked version of the driver I had been using on Stretch with 4.9 kernels was no longer compatible - it failed to compile due to missing symbols. The fix for me was to switch to the DKMS compatible driver from https://github.com/tomaspinho/rtl8821ce, as this seems to work for both 4.19 and 5.10 kernels (installed from backports).

eddy@aptonia:/ $ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
r8169
DRV_NAME=rtl8821ce
DRV_VERSION=v5.5.2_34066.20200325
cp -r . /usr/src/${DRV_NAME}-${DRV_VERSION}
dkms add -m ${DRV_NAME} -v ${DRV_VERSION}

But you just build and install them only for the select kernel versions of your choice:

dkms build -m ${DRV_NAME} -v ${DRV_VERSION} -k 5.10.0-0.bpo.8-amd64
dkms install -m ${DRV_NAME} -v ${DRV_VERSION} -k 5.10.0-0.bpo.8-amd64

Or, without the variables:

dkms build rtl8821ce/v5.5.2_34066.20200325 -k 4.19.0-17-amd64
dkms install rtl8821ce/v5.5.2_34066.20200325 -k 4.19.0-17-amd64

dkms status should confirm everything is in place, and I think you need to update grub2 again after this.
:wq
for today.
Date
and Datetime
calculations. The functions asPOSIXlt
and asPOSIXct
convert between long and compact datetime representation, formatPOSIXlt
and Rstrptime
convert to and from character strings, and POSIXlt2D
and D2POSIXlt
convert between Date
and POSIXlt
datetime. Lastly, asDatePOSIXct
converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change.
This pair of releases updates the code to the current R-devel standard, and refreshes a few standard packaging aspects, starting from making builds on the Windows UCRT platform possible. And while making an accommodation for one beloved architecture (in release 0.0.5), we introduced another issue on another almost equally beloved platform, which 0.0.6 clears up. It should be ready and stable now.
Courtesy of my CRANberries, there are comparisons to the previous releases 0.0.5 and 0.0.4, respectively. More information is on the rapidatetime page. For questions or comments please use the issue tracker off the GitHub repo.

Changes in RApiDatetime version 0.0.6 (2021-08-13)
- Correctly account for SunOS to have it avoid GMTOFF use
- A new test file was added to ensure NEWS.Rd is always at the current release version.
Changes in RApiDatetime version 0.0.5 (2021-08-05)
- Add a few #nocov tags
- Update continuous integration to use r-ci, reenable coverage
- Update DESCRIPTION with URL and BugReports fields
- Add new CI and LastCommitted badges to README.md
- Add compiler flag for Windows UCRT build
- Synchronized datetime function with upstream r-devel code
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Release highlights
sections in the following release notes:
Tools that have been taken over from / moved to other packages
Debian's util-linux source package provides new binary packages: eject (and eject-udeb) and bsdextrautils. The util-linux implementation of /usr/bin/eject is used now, replacing the one previously provided by the eject source package.
Overall, from a util-linux perspective the following shifts took place:
col, colcrt, colrm, column: moved from binary package bsdmainutils to bsdextrautils
eject: moved to binary package eject
hd: moved from binary package bsdmainutils to bsdextrautils
hexdump: moved from binary package bsdmainutils to bsdextrautils
look: moved from binary package bsdmainutils to bsdextrautils
ul: moved from binary package bsdmainutils to bsdextrautils
write(.ul): moved from binary package bsdmainutils (named bsd-write) to bsdextrautils
/usr/bin/rename.ul (rename files): use e.g. the rename package instead, see #926637 for details regarding the removal
/usr/bin/volname (return volume name for a device formatted with an ISO-9660 file system): use blkid -s LABEL -o value $filename instead
/usr/lib/eject/dmcrypt-get-device: no replacement available
scriptlive: re-execute stdin log by a shell in PTY session

lsirq + irqtop (to monitor kernel interrupts) sadly didn't make it into util-linux's packaging of Debian/bullseye (as without per-CPU data they do not seem mature at this time). The new hardlink tool (to consolidate duplicate files via hardlinks) won't be shipped, as there's an existing hardlink package already.
New features/options
agetty + getty:
  --show-issue          display issue file and exit

blkdiscard:
  --force               disable all checking

blkid:
  -D, --no-part-details don't print info from partition table

blkzone:
  Commands:
    open                Open a range of zones.
    close               Close a range of zones.
    finish              Set a range of zones to Full.
  Options:
    -f, --force         enforce on block devices used by the system

cfdisk:
  --lock[=<mode>]       use exclusive device lock (yes, no or nonblock)

dmesg:
  --noescape            don't escape unprintable character
  -W, --follow-new      wait and print only new messages

fdisk:
  -x, --list-details    like --list but with more details
  -n, --noauto-pt       don't create default partition table on empty devices
  --lock[=<mode>]       use exclusive device lock (yes, no or nonblock)

fstrim:
  -I, --listed-in <list> trim filesystems listed in specified files
  --quiet-unsupported   suppress error messages if trim unsupported

lsblk:
  Options:
    -E, --dedup <column> de-duplicate output by <column> (for example 'lsblk --dedup WWN' to de-duplicate devices by WWN number, e.g. multi-path devices)
    -M, --merge         group parents of sub-trees (usable for RAIDs, Multi-path), see http://karelzak.blogspot.com/2018/11/lsblk-merge.html
  New output columns:
    FSVER               filesystem version
    PARTTYPENAME        partition type name
    DAX                 dax-capable device

lscpu:
  Options:
    -B, --bytes         print sizes in bytes rather than in human readable format
    -C, --caches[=<list>] info about caches in extended readable format
    --output-all        print all available columns for -e, -p or -C
  Available output columns for -C:
    ALL-SIZE            size of all system caches
    LEVEL               cache level
    NAME                cache name
    ONE-SIZE            size of one cache
    TYPE                cache type
    WAYS                ways of associativity
    ALLOC-POLICY        allocation policy
    WRITE-POLICY        write policy
    PHY-LINE            number of physical cache line per cache tag
    SETS                number of sets in the cache; set lines has the same cache index
    COHERENCY-SIZE      minimum amount of data in bytes transferred from memory to cache

lslogins:
  --lastlog <path>      set an alternate path for lastlog

lsns:
  -t, --type time       namespace type time is also supported now (next to mnt, net, ipc, user, pid, uts, cgroup)

mkswap:
  --lock[=<mode>]       use exclusive device lock (yes, no or nonblock)

more:
  Options:
    -n, --lines <number> the number of lines per screenful
  New long options (in addition to the listed equivalent short options):
    --silent            equivalent to -d
    --logical           equivalent to -f
    --no-pause          equivalent to -l
    --print-over        equivalent to -c
    --clean-print       equivalent to -p
    --squeeze           equivalent to -s
    --plain             equivalent to -u

mount:
  Options:
    --target-prefix <path> specifies path use for all mountpoints
  Source:
    ID=<id>             specifies device by udev hardware ID

mountpoint:
  --nofollow            do not follow symlink

nsenter:
  -T, --time[=<file>]   enter time namespace

script:
  -I, --log-in <file>   log stdin to file
  -O, --log-out <file>  log stdout to file (default)
  -B, --log-io <file>   log stdin and stdout to file
  -T, --log-timing <file> log timing information to file
  -m, --logging-format <name> force to 'classic' or 'advanced' format
  -E, --echo <when>     echo input (auto, always or never)

sfdisk:
  --disk-id <dev> [<str>] print or change disk label ID (UUID)
  --relocate <oper> <dev> move partition header
  --move-use-fsync      use fsync after each write when move data
  --lock[=<mode>]       use exclusive device lock (yes, no or nonblock)

unshare:
  -T, --time[=<file>]   unshare time namespace
  --map-user=<uid>|<name> map current user to uid (implies --user)
  --map-group=<gid>|<name> map current group to gid (implies --user)
  -c, --map-current-user map current user to itself (implies --user)
  --keep-caps           retain capabilities granted in user namespaces
  -R, --root=<dir>      run the command with root directory set to <dir>
  -w, --wd=<dir>        change working directory to <dir>
  -S, --setuid <uid>    set uid in entered namespace
  -G, --setgid <gid>    set gid in entered namespace
  --monotonic <offset>  set clock monotonic offset (seconds) in time namespaces
  --boottime <offset>   set clock boottime offset (seconds) in time namespaces

wipefs:
  --lock[=<mode>]       use exclusive device lock (yes, no or nonblock)
Series: Deep Witches #0
Publisher: A Girl and Her Fed Books
Copyright: September 2017
ASIN: B075PHK498
Format: Kindle
Pages: 226
Next.